Asymmetric Laplace Distribution (laplace_asymmetric)#
The asymmetric Laplace distribution (also called the two-sided exponential) is a continuous distribution with exponential tails on both sides of its mode, but with different decay rates on the left and right.
It is a convenient model for skewed, heavy-tailed noise and appears prominently as a likelihood for quantile regression.
This notebook follows SciPy’s parameterization: scipy.stats.laplace_asymmetric(kappa, loc, scale).
Learning goals#
By the end you should be able to:
Write the PDF/CDF (standard and location-scale forms) and understand the roles of `kappa`, `loc`, and `scale`.
Compute and interpret mean, variance, skewness, kurtosis, the MGF/characteristic function, and entropy.
Derive the likelihood and connect it to the quantile-regression check loss.
Sample efficiently with a NumPy-only algorithm and validate results by simulation.
Use SciPy’s `laplace_asymmetric` for evaluation, sampling, and parameter fitting.
import platform
import numpy as np
import plotly.graph_objects as go
import os
import plotly.io as pio
from plotly.subplots import make_subplots
import scipy
from scipy import stats
from scipy.stats import chi2
# Plotly rendering (CKC convention)
pio.templates.default = "plotly_white"
pio.renderers.default = os.environ.get("PLOTLY_RENDERER", "notebook")
# Reproducibility
rng = np.random.default_rng(7)
np.set_printoptions(precision=4, suppress=True)
print("Python", platform.python_version())
print("NumPy", np.__version__)
print("SciPy", scipy.__version__)
Python 3.12.9
NumPy 1.26.2
SciPy 1.15.0
1) Title & Classification#
Name:
`laplace_asymmetric` (Asymmetric Laplace; SciPy: `scipy.stats.laplace_asymmetric`)
Type: Continuous
Support: \(x \in (-\infty, \infty)\)
Parameter space:
Shape: \(\kappa > 0\)
Location: \(\mathrm{loc} \in \mathbb{R}\)
Scale: \(\mathrm{scale} > 0\)
We’ll often write the standardized variable
$$
Y = \frac{X-\mathrm{loc}}{\mathrm{scale}},
$$
so the standardized distribution corresponds to loc=0, scale=1.
2) Intuition & Motivation#
2.1 What it models#
The asymmetric Laplace distribution is a natural model when:
deviations to the left and right of a typical value have different rates (asymmetry), and
tails are heavier than Gaussian but still exponential (robustness to outliers).
A helpful mental picture: it is the distribution you get when you glue together two exponentials at a point (the mode), allowing different decay on each side.
2.2 Typical real-world use cases#
Quantile regression: using an asymmetric Laplace likelihood makes the MLE for the location parameter align with a chosen quantile (more below).
Skewed residuals: economics/finance (returns or spreads), operations (delays), or any setting where errors are not symmetric.
Robust modeling: like Laplace noise but allowing one-sided outliers to be more likely.
2.3 Relations to other distributions#
If \(\kappa = 1\), the distribution reduces to the Laplace (double-exponential) distribution.
Conditional on the sign relative to the mode, the distribution is exponential:
right side decays with rate \(\kappa\) (in standardized form),
left side decays with rate \(1/\kappa\).
Generative representation (standardized form):
$$
X = Y - Z,\quad Y\sim\mathrm{Exp}(\text{rate}=\kappa),\; Z\sim\mathrm{Exp}(\text{rate}=1/\kappa),\; Y\perp Z.
$$
This representation is great for sampling.
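The conditional-exponential fact above is easy to check by simulation: split samples by sign and compare conditional means to the exponential means \(1/\kappa\) and \(\kappa\). A standalone sketch with its own generator (separate from the notebook's `rng`):

```python
import numpy as np
from scipy import stats

rng_demo = np.random.default_rng(0)
kappa = 2.0
x_demo = stats.laplace_asymmetric.rvs(kappa, size=200_000, random_state=rng_demo)

# X | X >= 0 ~ Exp(rate=kappa)  -> mean 1/kappa = 0.5
# -X | X < 0 ~ Exp(rate=1/kappa) -> mean kappa  = 2.0
right = x_demo[x_demo >= 0]
left = -x_demo[x_demo < 0]
print("mean of right side:", right.mean(), "(expected", 1 / kappa, ")")
print("mean of left side: ", left.mean(), "(expected", kappa, ")")
```
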
3) Formal Definition#
SciPy defines the standardized asymmetric Laplace distribution (with loc=0, scale=1) via the PDF
$$
f(x;\kappa) = \frac{1}{\kappa+\kappa^{-1}}
\begin{cases}
\exp(-\kappa x), & x \ge 0,\\
\exp(x/\kappa), & x < 0.
\end{cases}
$$
3.1 CDF (standardized)#
Integrating the PDF gives
$$
F(x;\kappa) =
\begin{cases}
\dfrac{\kappa^2}{1+\kappa^2}\,\exp(x/\kappa), & x < 0,\\[4pt]
1 - \dfrac{1}{1+\kappa^2}\,\exp(-\kappa x), & x \ge 0.
\end{cases}
$$
The PDF has a kink at \(x=0\), but the CDF is continuous there, with
$$
F(0;\kappa) = \frac{\kappa^2}{1+\kappa^2}.
$$
3.2 Location-scale form#
With loc and scale, SciPy uses the standard location-scale transformation:
$$
Y = \frac{X-\mathrm{loc}}{\mathrm{scale}}\sim\text{standardized AL}(\kappa).
$$
So
$$
f(x;\kappa,\mathrm{loc},\mathrm{scale}) = \frac{1}{\mathrm{scale}}\,
f\!\left(\frac{x-\mathrm{loc}}{\mathrm{scale}};\kappa\right),
$$
and similarly for the CDF.
def _validate_kappa_scale(kappa: float, scale: float) -> None:
if kappa <= 0:
raise ValueError("kappa must be > 0")
if scale <= 0:
raise ValueError("scale must be > 0")
def laplace_asymmetric_pdf(
x: np.ndarray,
kappa: float,
loc: float = 0.0,
scale: float = 1.0,
) -> np.ndarray:
"""Asymmetric Laplace PDF (SciPy parameterization) implemented with NumPy."""
_validate_kappa_scale(kappa, scale)
x = np.asarray(x, dtype=float)
y = (x - loc) / scale
c = 1.0 / (kappa + 1.0 / kappa)
core = np.where(y >= 0, np.exp(-kappa * y), np.exp(y / kappa))
return (c / scale) * core
def laplace_asymmetric_logpdf(
x: np.ndarray,
kappa: float,
loc: float = 0.0,
scale: float = 1.0,
) -> np.ndarray:
"""Log-PDF, useful for numerical stability in the tails."""
_validate_kappa_scale(kappa, scale)
x = np.asarray(x, dtype=float)
y = (x - loc) / scale
log_norm = -np.log(scale) - np.log(kappa + 1.0 / kappa)
return log_norm + np.where(y >= 0, -kappa * y, y / kappa)
def laplace_asymmetric_cdf(
x: np.ndarray,
kappa: float,
loc: float = 0.0,
scale: float = 1.0,
) -> np.ndarray:
"""Asymmetric Laplace CDF (SciPy parameterization) implemented with NumPy."""
_validate_kappa_scale(kappa, scale)
x = np.asarray(x, dtype=float)
y = (x - loc) / scale
denom = 1.0 + kappa**2
left = (kappa**2 / denom) * np.exp(y / kappa)
right = 1.0 - np.exp(-kappa * y) / denom
return np.where(y < 0, left, right)
# Sanity checks against SciPy
kappa = 2.0
loc = 0.5
scale = 1.3
x = np.linspace(loc - 20 * scale, loc + 20 * scale, 20_001)
pdf_np = laplace_asymmetric_pdf(x, kappa, loc=loc, scale=scale)
cdf_np = laplace_asymmetric_cdf(x, kappa, loc=loc, scale=scale)
pdf_sp = stats.laplace_asymmetric.pdf(x, kappa, loc=loc, scale=scale)
cdf_sp = stats.laplace_asymmetric.cdf(x, kappa, loc=loc, scale=scale)
print("Approx integral of PDF (trapz):", np.trapz(pdf_np, x))
print("CDF endpoints (NumPy):", float(cdf_np[0]), float(cdf_np[-1]))
print("max |pdf diff|:", float(np.max(np.abs(pdf_np - pdf_sp))))
print("max |cdf diff|:", float(np.max(np.abs(cdf_np - cdf_sp))))
Approx integral of PDF (trapz): 0.9999640133864247
CDF endpoints (NumPy): 3.631994380998788e-05 1.0
max |pdf diff|: 1.1102230246251565e-16
max |cdf diff|: 1.1102230246251565e-16
4) Moments & Properties#
Let \(X \sim \texttt{laplace\_asymmetric}(\kappa, \mathrm{loc}, \mathrm{scale})\) in SciPy’s parameterization.
4.1 Mean and variance#
For the standardized case (loc=0, scale=1):
$$
\mathbb{E}[X] = \frac{1}{\kappa} - \kappa,\qquad \mathrm{Var}(X)=\kappa^2 + \kappa^{-2}.
$$
With location and scale:
$$
\mathbb{E}[X] = \mathrm{loc} + \mathrm{scale}\left(\frac{1}{\kappa} - \kappa\right),\qquad
\mathrm{Var}(X)=\mathrm{scale}^2\left(\kappa^2 + \kappa^{-2}\right).
$$
4.2 Skewness and kurtosis#
For the standardized case,
$$
\gamma_1 = \frac{2\,(1-\kappa^6)}{(1+\kappa^4)^{3/2}},\qquad
\gamma_2 = \frac{6\,(1+\kappa^8)}{(1+\kappa^4)^{2}},
$$
where \(\gamma_2\) denotes excess kurtosis. Skewness and kurtosis do not depend on loc or scale (for positive scale). In SciPy’s convention, `stats(..., moments='k')` returns excess kurtosis.
4.3 MGF and characteristic function#
For the standardized distribution,
$$
M_X(t)=\mathbb{E}[e^{tX}] = \frac{1}{(\kappa-t)(\kappa^{-1}+t)},
\qquad t\in\left(-\frac{1}{\kappa},\kappa\right).
$$
With location and scale:
$$
M_X(t)=\frac{\exp(\mathrm{loc}\,t)}{\bigl(\kappa-\mathrm{scale}\,t\bigr)\bigl(\kappa^{-1}+\mathrm{scale}\,t\bigr)},
\qquad t\in\left(-\frac{1}{\kappa\,\mathrm{scale}},\frac{\kappa}{\mathrm{scale}}\right).
$$
The characteristic function is obtained by substituting \(t\mapsto it\).
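The MGF formula can be sanity-checked by Monte Carlo at a single point inside its domain. A standalone sketch (the choices `kappa=1.5` and `t=0.5` are arbitrary, with \(t\) inside \((-1/\kappa, \kappa)\)):

```python
import numpy as np
from scipy import stats

rng_demo = np.random.default_rng(1)
kappa = 1.5
t = 0.5  # inside the domain (-1/1.5, 1.5)

# Compare the empirical mean of exp(t*X) to the closed-form standardized MGF.
x_demo = stats.laplace_asymmetric.rvs(kappa, size=500_000, random_state=rng_demo)
mgf_mc = np.mean(np.exp(t * x_demo))
mgf_th = 1.0 / ((kappa - t) * (1.0 / kappa + t))
print("Monte Carlo:", mgf_mc, " theory:", mgf_th)
```
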
4.4 Entropy#
The differential entropy is
$$
H(X) = 1 + \log\bigl(\mathrm{scale}\,(\kappa+\kappa^{-1})\bigr).
$$
4.5 A useful fact: what does loc represent?#
In this parameterization, the density is maximized at loc, so loc is the mode.
Also,
$$
F(\mathrm{loc}) = \frac{\kappa^2}{1+\kappa^2},
$$
so `loc` is a fixed **quantile** that depends on \(\kappa\) (it is the median only when \(\kappa=1\)).
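This quantile fact is easy to verify directly with SciPy’s `cdf` and `ppf` (the choices of `loc` and `scale` below are arbitrary):

```python
import numpy as np
from scipy import stats

# F(loc) should equal tau = kappa^2/(1+kappa^2), and ppf(tau) should recover loc.
for kappa in [0.5, 1.0, 2.0]:
    loc, scale = 0.7, 1.4
    tau = kappa**2 / (1.0 + kappa**2)
    f_at_loc = stats.laplace_asymmetric.cdf(loc, kappa, loc=loc, scale=scale)
    q_tau = stats.laplace_asymmetric.ppf(tau, kappa, loc=loc, scale=scale)
    print(f"kappa={kappa}: F(loc)={f_at_loc:.4f}, tau={tau:.4f}, ppf(tau)={q_tau:.4f}")
```

For `kappa=1` this reproduces the familiar fact that `loc` is the median of the symmetric Laplace.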
def laplace_asymmetric_moments(
kappa: float,
loc: float = 0.0,
scale: float = 1.0,
) -> tuple[float, float, float, float]:
"""Return mean, variance, skewness, and excess kurtosis."""
_validate_kappa_scale(kappa, scale)
mean0 = 1.0 / kappa - kappa
var0 = kappa**2 + 1.0 / (kappa**2)
skew = 2.0 * (1.0 - kappa**6) / (1.0 + kappa**4) ** 1.5
kurt_excess = 6.0 * (1.0 + kappa**8) / (1.0 + kappa**4) ** 2
mean = loc + scale * mean0
var = (scale**2) * var0
return float(mean), float(var), float(skew), float(kurt_excess)
def laplace_asymmetric_entropy(kappa: float, scale: float = 1.0) -> float:
_validate_kappa_scale(kappa, scale)
return float(1.0 + np.log(scale * (kappa + 1.0 / kappa)))
# Compare to SciPy
kappa = 0.7
loc = -1.0
scale = 2.0
mean, var, skew, kurt = laplace_asymmetric_moments(kappa, loc=loc, scale=scale)
mean_sp, var_sp, skew_sp, kurt_sp = stats.laplace_asymmetric.stats(
kappa, loc=loc, scale=scale, moments="mvsk"
)
print("mean (theory, SciPy):", mean, float(mean_sp))
print("variance (theory, SciPy):", var, float(var_sp))
print("skewness (theory, SciPy):", skew, float(skew_sp))
print("kurtosis* (theory, SciPy):", kurt, float(kurt_sp), "(*excess)")
print("entropy (theory, SciPy):", laplace_asymmetric_entropy(kappa, scale), float(stats.laplace_asymmetric.entropy(kappa, scale=scale)))
mean (theory, SciPy): 0.4571428571428573 0.4571428571428573
variance (theory, SciPy): 10.12326530612245 10.123265306122448
skewness (theory, SciPy): 1.2778689469997964 1.2778689469997964
kurtosis* (theory, SciPy): 4.126472849550327 4.126472849550327 (*excess)
entropy (theory, SciPy): 2.4485982444560452 2.4485982444560457
5) Parameter Interpretation#
SciPy uses three parameters:
`kappa` (shape, \(\kappa>0\)) controls asymmetry.
Right side (x >= loc) decays like \(\exp\{-\kappa (x-\mathrm{loc})/\mathrm{scale}\}\).
Left side (x < loc) decays like \(\exp\{(x-\mathrm{loc})/(\kappa\,\mathrm{scale})\}\).
If \(\kappa>1\): left tail is heavier and the mean is below the mode.
If \(\kappa<1\): right tail is heavier and the mean is above the mode.
`loc` shifts the distribution; it is the mode (the kink point where the two exponentials meet).
`scale` (positive) stretches distances from `loc` linearly.
A common confusion: some references use a parameter that is the reciprocal of SciPy’s scale. Always check parameterization when moving between sources.
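To make this warning concrete, here is a sketch of a hypothetical "rate" convention with \(\lambda = 1/\mathrm{scale}\) (the function `ald_pdf_rate` and the symbols \(\mu, \lambda\) are our own illustration, not taken from any specific reference); it matches SciPy’s density once the parameters are converted:

```python
import numpy as np
from scipy import stats

def ald_pdf_rate(x, kappa, mu=0.0, lam=1.0):
    """ALD density in a hypothetical rate convention: lam = 1/scale, mu = loc."""
    y = lam * (np.asarray(x, dtype=float) - mu)
    c = lam / (kappa + 1.0 / kappa)
    return c * np.where(y >= 0, np.exp(-kappa * y), np.exp(y / kappa))

x_demo = np.linspace(-5, 5, 11)
scale = 0.8
diff = np.max(np.abs(
    ald_pdf_rate(x_demo, 1.6, mu=0.2, lam=1.0 / scale)
    - stats.laplace_asymmetric.pdf(x_demo, 1.6, loc=0.2, scale=scale)
))
print("max |difference| after converting lam = 1/scale:", diff)
```
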
# Shape changes as kappa varies (loc=0, scale=1)
loc = 0.0
scale = 1.0
kappas = [0.5, 1.0, 2.0]
x = np.linspace(-8, 8, 2000)
fig = make_subplots(rows=1, cols=2, subplot_titles=("PDF", "CDF"))
for k in kappas:
fig.add_trace(
go.Scatter(x=x, y=laplace_asymmetric_pdf(x, k, loc=loc, scale=scale), mode="lines", name=f"kappa={k}"),
row=1,
col=1,
)
fig.add_trace(
go.Scatter(x=x, y=laplace_asymmetric_cdf(x, k, loc=loc, scale=scale), mode="lines", name=f"kappa={k}", showlegend=False),
row=1,
col=2,
)
fig.add_vline(x=loc, line_dash="dot", line_color="gray")
fig.update_xaxes(title_text="x", row=1, col=1)
fig.update_xaxes(title_text="x", row=1, col=2)
fig.update_yaxes(title_text="f(x)", row=1, col=1)
fig.update_yaxes(title_text="F(x)", row=1, col=2)
fig.update_layout(title="Asymmetric Laplace: effect of kappa (loc=0, scale=1)")
fig.show()
6) Derivations#
We’ll sketch the key derivations in the standardized case (loc=0, scale=1) and then apply location/scale transformations.
6.1 Expectation#
Using the piecewise PDF with normalization constant \(c = 1/(\kappa+\kappa^{-1})\):
$$
\mathbb{E}[X] = c\left(\int_0^\infty x\,e^{-\kappa x}\,dx + \int_{-\infty}^0 x\,e^{x/\kappa}\,dx\right).
$$
We can use standard Gamma integrals:
$$
\int_0^\infty x\,e^{-\kappa x}\,dx = \frac{1}{\kappa^2},
\qquad
\int_{-\infty}^0 x\,e^{x/\kappa}\,dx = -\kappa^2,
$$
so
$$
\mathbb{E}[X] = c\left(\frac{1}{\kappa^2} - \kappa^2\right)=\frac{1}{\kappa}-\kappa.
$$
6.2 Variance#
Using the exponential-difference representation \(X = Y - Z\) with \(Y\sim\mathrm{Exp}(\kappa)\) and \(Z\sim\mathrm{Exp}(1/\kappa)\) independent,
$$
\mathrm{Var}(X) = \mathrm{Var}(Y) + \mathrm{Var}(Z) = \frac{1}{\kappa^2} + \kappa^2.
$$
6.3 Likelihood#
Given i.i.d. data \(x_1,\dots,x_n\) and parameters \((\kappa,\mathrm{loc},\mathrm{scale})\), the log-likelihood is
$$
\ell(\kappa,\mathrm{loc},\mathrm{scale})
= -n\log\bigl(\mathrm{scale}\,(\kappa+\kappa^{-1})\bigr)
- \sum_{i=1}^n
\begin{cases}
\kappa\, y_i, & y_i \ge 0,\\
-y_i/\kappa, & y_i < 0,
\end{cases}
\qquad y_i = \frac{x_i-\mathrm{loc}}{\mathrm{scale}}.
$$
Up to constants, the negative log-likelihood is a weighted absolute deviation loss.
Define
$$
\tau = \frac{\kappa^2}{1+\kappa^2}.
$$
Then the same loss can be written (up to a positive scalar factor) as the **quantile regression check loss** \(\rho_\tau(u)=u\,(\tau-\mathbf{1}\{u<0\})\) applied to residuals \(u=(x-\mathrm{loc})/\mathrm{scale}\).
This is why asymmetric Laplace likelihoods are closely tied to quantile regression.
def laplace_asymmetric_nll(
x: np.ndarray,
kappa: float,
loc: float,
scale: float,
) -> float:
"""Negative log-likelihood for i.i.d. observations x (NumPy implementation)."""
_validate_kappa_scale(kappa, scale)
x = np.asarray(x, dtype=float)
y = (x - loc) / scale
# Weighted absolute deviation term
loss = np.where(y >= 0, kappa * y, -y / kappa)
return float(x.size * (np.log(scale) + np.log(kappa + 1.0 / kappa)) + np.sum(loss))
kappa = 2.0
tau = kappa**2 / (1.0 + kappa**2)
print("For kappa=2, tau = kappa^2/(1+kappa^2) =", tau)
For kappa=2, tau = kappa^2/(1+kappa^2) = 0.8
7) Sampling & Simulation#
NumPy-only algorithm (difference of exponentials)#
Use the representation (standardized form): $\( X = Y - Z,\quad Y\sim\mathrm{Exp}(\text{rate}=\kappa),\; Z\sim\mathrm{Exp}(\text{rate}=1/\kappa),\; Y\perp Z. \)$
Steps:
Sample \(Y\) from an exponential with mean \(1/\kappa\).
Sample \(Z\) from an exponential with mean \(\kappa\).
Return \(X = \mathrm{loc} + \mathrm{scale}\,(Y - Z)\).
This is fast, vectorized, and requires only numpy.random.Generator.exponential.
def laplace_asymmetric_rvs_numpy(
rng: np.random.Generator,
kappa: float,
loc: float = 0.0,
scale: float = 1.0,
size: int | tuple[int, ...] = 1,
) -> np.ndarray:
"""Generate random variates using only NumPy.
Uses: X = loc + scale * (Y - Z),
where Y ~ Exp(rate=kappa) and Z ~ Exp(rate=1/kappa) independent.
"""
_validate_kappa_scale(kappa, scale)
# NumPy parameterizes exponential by its mean (scale = 1/rate).
y = rng.exponential(scale=1.0 / kappa, size=size)
z = rng.exponential(scale=kappa, size=size)
return loc + scale * (y - z)
# Monte Carlo validation: moments
kappa = 2.0
loc = 0.0
scale = 1.0
n = 300_000
x_samp = laplace_asymmetric_rvs_numpy(rng, kappa, loc=loc, scale=scale, size=n)
mean_th, var_th, skew_th, kurt_th = laplace_asymmetric_moments(kappa, loc=loc, scale=scale)
print("sample mean ", float(np.mean(x_samp)), "theory", mean_th)
print("sample var ", float(np.var(x_samp)), "theory", var_th)
print("sample skew ", float(stats.skew(x_samp)), "theory", skew_th)
print("sample kurt* ", float(stats.kurtosis(x_samp, fisher=True)), "theory", kurt_th, "(*excess)")
sample mean -1.498592859519291 theory -1.5
sample var 4.219823735932372 theory 4.25
sample skew -1.77405071546256 theory -1.7976169855634092
sample kurt* 5.110592466580817 theory 5.335640138408304 (*excess)
8) Visualization#
We’ll visualize:
the PDF and CDF for a chosen parameter set
Monte Carlo samples via histogram (PDF overlay)
empirical CDF vs theoretical CDF
kappa = 2.0
loc = 0.0
scale = 1.0
n = 120_000
x_samp = laplace_asymmetric_rvs_numpy(rng, kappa, loc=loc, scale=scale, size=n)
x_grid = np.linspace(-8, 8, 2000)
pdf = laplace_asymmetric_pdf(x_grid, kappa, loc=loc, scale=scale)
cdf = laplace_asymmetric_cdf(x_grid, kappa, loc=loc, scale=scale)
# Empirical CDF
x_sorted = np.sort(x_samp)
ecdf = np.arange(1, n + 1) / n
fig = make_subplots(
rows=2,
cols=2,
subplot_titles=(
"PDF", "Histogram + PDF overlay", "CDF", "Empirical CDF vs theoretical",
),
)
# PDF
fig.add_trace(go.Scatter(x=x_grid, y=pdf, mode="lines", name="pdf"), row=1, col=1)
# Histogram + PDF
fig.add_trace(
go.Histogram(
x=x_samp,
nbinsx=120,
histnorm="probability density",
opacity=0.45,
name="samples",
showlegend=False,
),
row=1,
col=2,
)
fig.add_trace(
go.Scatter(x=x_grid, y=pdf, mode="lines", name="pdf overlay", line=dict(color="black")),
row=1,
col=2,
)
# CDF
fig.add_trace(go.Scatter(x=x_grid, y=cdf, mode="lines", name="cdf", showlegend=False), row=2, col=1)
# Empirical vs theoretical
fig.add_trace(
go.Scatter(x=x_sorted, y=ecdf, mode="lines", name="empirical", showlegend=False),
row=2,
col=2,
)
fig.add_trace(
go.Scatter(x=x_grid, y=cdf, mode="lines", name="theoretical", line=dict(color="black", dash="dash"), showlegend=False),
row=2,
col=2,
)
for r in [1, 2]:
for c in [1, 2]:
fig.update_xaxes(title_text="x", row=r, col=c)
fig.update_yaxes(title_text="f(x)", row=1, col=1)
fig.update_yaxes(title_text="density", row=1, col=2)
fig.update_yaxes(title_text="F(x)", row=2, col=1)
fig.update_yaxes(title_text="probability", row=2, col=2)
fig.update_layout(title=f"Asymmetric Laplace visuals (kappa={kappa}, loc={loc}, scale={scale})")
fig.show()
9) SciPy Integration#
SciPy provides scipy.stats.laplace_asymmetric with the usual rv_continuous interface:
laplace_asymmetric.pdf(x, kappa, loc=0, scale=1)
laplace_asymmetric.cdf(x, kappa, loc=0, scale=1)
laplace_asymmetric.rvs(kappa, loc=0, scale=1, size=..., random_state=...)
laplace_asymmetric.fit(data) (MLE)
As always in SciPy, you can freeze parameters: rv = laplace_asymmetric(kappa, loc=..., scale=...).
from scipy.stats import laplace_asymmetric
kappa_true = 1.7
loc_true = 0.8
scale_true = 0.6
data = laplace_asymmetric.rvs(
kappa_true,
loc=loc_true,
scale=scale_true,
size=5000,
random_state=rng,
)
# Fit returns (kappa_hat, loc_hat, scale_hat)
kappa_hat, loc_hat, scale_hat = laplace_asymmetric.fit(data)
print("true params:", (kappa_true, loc_true, scale_true))
print("fit params:", (float(kappa_hat), float(loc_hat), float(scale_hat)))
# Frozen distribution object
rv = laplace_asymmetric(kappa_hat, loc=loc_hat, scale=scale_hat)
x = np.linspace(np.percentile(data, 0.5), np.percentile(data, 99.5), 800)
pdf_hat = rv.pdf(x)
cdf_hat = rv.cdf(x)
print("pdf(x) shape:", pdf_hat.shape)
print("cdf(x) in [0,1]?:", float(np.min(cdf_hat)), float(np.max(cdf_hat)))
true params: (1.7, 0.8, 0.6)
fit params: (1.7114842962167574, 0.8021232338422095, 0.5919613682845734)
pdf(x) shape: (800,)
cdf(x) in [0,1]?: 0.004820162907619045 0.9954160254634811
10) Statistical Use Cases#
10.1 Hypothesis testing (example: symmetry)#
A simple question is whether the distribution is symmetric, i.e. \(\kappa=1\). One approach is a likelihood ratio test comparing:
\(H_0\): \(\kappa = 1\) (symmetric Laplace)
\(H_1\): \(\kappa\) free
Under standard regularity conditions, the LRT statistic is asymptotically \(\chi^2_1\).
10.2 Bayesian modeling#
The asymmetric Laplace is often used as a likelihood when you want the location parameter to represent a target quantile. With an appropriate mapping between \(\kappa\) and \(\tau\), the negative log-likelihood is proportional to the check loss used in quantile regression.
In a Bayesian setting, you might put priors on:
`loc` (e.g. normal prior)
`scale` (e.g. half-normal or half-Cauchy)
`kappa` (e.g. log-normal, since \(\kappa>0\))
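As a minimal sketch of that setup (no probabilistic-programming library required), an unnormalized log-posterior can be written directly in NumPy. The prior hyperparameters below are illustrative assumptions, not recommendations:

```python
import numpy as np

def log_posterior(data, kappa, loc, scale):
    """Unnormalized log-posterior sketch: ALD likelihood plus illustrative priors
    (normal on loc, half-normal on scale, log-normal on kappa)."""
    if kappa <= 0 or scale <= 0:
        return -np.inf  # outside the parameter space
    data = np.asarray(data, dtype=float)
    y = (data - loc) / scale
    loglik = -data.size * (np.log(scale) + np.log(kappa + 1.0 / kappa))
    loglik -= np.sum(np.where(y >= 0, kappa * y, -y / kappa))
    logprior = -0.5 * (loc / 10.0) ** 2                     # loc ~ Normal(0, 10)
    logprior += -0.5 * (scale / 5.0) ** 2                   # scale ~ HalfNormal(5)
    logprior += -0.5 * np.log(kappa) ** 2 - np.log(kappa)   # kappa ~ LogNormal(0, 1)
    return float(loglik + logprior)

rng_demo = np.random.default_rng(3)
demo_data = rng_demo.normal(size=100)
lp = log_posterior(demo_data, kappa=1.0, loc=0.0, scale=1.0)
print("log-posterior at (kappa, loc, scale) = (1, 0, 1):", lp)
```

A function like this could be handed to a generic MCMC sampler or optimized for a MAP estimate.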
10.3 Generative modeling#
Asymmetric Laplace noise is a useful alternative to Gaussian noise in generative models when you want robustness (exponential tails) and asymmetry (one-sided outliers more common).
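A minimal generative sketch: corrupt a clean sine wave with asymmetric Laplace noise, choosing \(\kappa=0.5\) so that positive outliers are more common than negative ones (the signal and parameter values are arbitrary illustrations):

```python
import numpy as np
from scipy import stats

rng_demo = np.random.default_rng(5)
t = np.linspace(0, 1, 500)
signal = np.sin(2 * np.pi * t)

# kappa < 1: heavier right tail, so upward outliers dominate.
noise = stats.laplace_asymmetric.rvs(0.5, scale=0.1, size=t.size, random_state=rng_demo)
noisy = signal + noise

# Theoretical noise mean: scale * (1/kappa - kappa) = 0.1 * (2 - 0.5) = 0.15.
print("noise mean:", noise.mean())
```

Note the noise is not zero-mean; subtract `scale*(1/kappa - kappa)` if an unbiased noise term is wanted.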
# Likelihood ratio test (LRT) for symmetry: kappa = 1
kappa_true = 2.0
loc_true = 0.2
scale_true = 1.1
n = 3000
x = laplace_asymmetric.rvs(
kappa_true,
loc=loc_true,
scale=scale_true,
size=n,
random_state=rng,
)
# H1: free kappa
k1, loc1, s1 = laplace_asymmetric.fit(x)
ll1 = float(np.sum(laplace_asymmetric.logpdf(x, k1, loc=loc1, scale=s1)))
# H0: fix kappa=1 (SciPy convention: first shape param fixed via f0)
k0, loc0, s0 = laplace_asymmetric.fit(x, f0=1.0)
ll0 = float(np.sum(laplace_asymmetric.logpdf(x, k0, loc=loc0, scale=s0)))
lrt = 2.0 * (ll1 - ll0)
p_value = float(chi2.sf(lrt, df=1))
print("H1 fit (kappa, loc, scale):", float(k1), float(loc1), float(s1))
print("H0 fit (kappa fixed=1): ", float(k0), float(loc0), float(s0))
print("LRT statistic:", lrt)
print("Approx p-value (chi2_1):", p_value)
H1 fit (kappa, loc, scale): 2.010996988292344 0.18640242191121764 1.1263487813287028
H0 fit (kappa fixed=1): 1.0 -0.8838591601696771 1.634036655805251
LRT statistic: 873.9994876189685
Approx p-value (chi2_1): 4.406681390690768e-192
# Connection to quantiles: for fixed kappa and scale, the MLE of loc is a tau-quantile.
kappa = 2.0
scale = 1.0
tau = kappa**2 / (1.0 + kappa**2)
# Data do not have to be ALD for this optimization identity to make sense.
data = rng.normal(loc=1.0, scale=2.0, size=400)
grid = np.linspace(np.percentile(data, 1), np.percentile(data, 99), 500)
nll_vals = np.array([laplace_asymmetric_nll(data, kappa, loc=g, scale=scale) for g in grid])
loc_hat_grid = float(grid[np.argmin(nll_vals)])
loc_tau_quantile = float(np.quantile(data, tau))
print("tau:", tau)
print("argmin NLL over grid:", loc_hat_grid)
print("empirical tau-quantile:", loc_tau_quantile)
fig = go.Figure()
fig.add_trace(go.Scatter(x=grid, y=nll_vals, mode="lines", name="NLL(loc)"))
fig.add_vline(x=loc_hat_grid, line_dash="dash", line_color="green", annotation_text="MLE loc (grid)")
fig.add_vline(x=loc_tau_quantile, line_dash="dot", line_color="red", annotation_text="tau-quantile")
fig.update_layout(
title=f"Asymmetric Laplace NLL as a function of loc (kappa={kappa}, tau={tau:.3f})",
xaxis_title="loc",
yaxis_title="negative log-likelihood",
)
fig.show()
tau: 0.8
argmin NLL over grid: 2.7971737044184
empirical tau-quantile: 2.7943029024378885
11) Pitfalls#
Parameterization mismatches: other sources may swap rate/scale conventions; SciPy notes some references use the reciprocal of `scale`.
Interpreting `loc`: here `loc` is the mode, not the mean (unless \(\kappa=1\)).
Invalid parameters: require `kappa > 0` and `scale > 0`.
Numerical issues in tails: prefer `logpdf` over `pdf` when working with extreme values (to avoid underflow).
MGF domain: the MGF exists only for \(t\in(-1/(\kappa\,\mathrm{scale}),\,\kappa/\mathrm{scale})\).
Fitting: `fit` is an MLE routine; for small samples or extreme asymmetry it can be unstable. Use diagnostics (QQ plots, residual checks) and consider robust alternatives.
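The underflow pitfall is easy to demonstrate: deep in a tail, `pdf` underflows to zero while `logpdf` remains finite and usable.

```python
import numpy as np
from scipy import stats

kappa = 2.0
x_far = -4000.0  # deep in the left tail: log-density ~ x/kappa = -2000

p = stats.laplace_asymmetric.pdf(x_far, kappa)      # underflows to 0.0
lp = stats.laplace_asymmetric.logpdf(x_far, kappa)  # finite, about -2000.9
print("pdf:   ", p)
print("logpdf:", lp)
```
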
12) Summary#
`laplace_asymmetric` is a continuous two-sided exponential distribution with asymmetric tails controlled by \(\kappa\).
In SciPy’s parameterization, `loc` is the mode, and the mean is `loc + scale*(1/kappa - kappa)`.
Moments are available in closed form; skewness changes sign at \(\kappa=1\).
Sampling is easy via a difference of exponentials (NumPy-only).
The likelihood corresponds (up to constants) to a weighted absolute deviation / quantile regression check loss.
SciPy provides `pdf`, `cdf`, `rvs`, and `fit` via `scipy.stats.laplace_asymmetric`.
References#
SciPy docs: `scipy.stats.laplace_asymmetric`
Wikipedia: “Asymmetric Laplace distribution”
Kozubowski & Podgórski (2000): A Multivariate and Asymmetric Generalization of Laplace Distribution